The Advancements of AI Models Mimicking Human Hearing
Speech Recognition Models are designed to recognize and transcribe spoken language into text. This technology is widely used in virtual assistants such as Amazon's Alexa, Google Assistant, and Apple's Siri. These models are trained on large datasets of audio recordings and use machine learning algorithms such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to make predictions. Popular examples include Google's Speech-to-Text, Amazon Transcribe, and Microsoft's Azure Speech Services. Sound Event Detection Models, on the other hand, are designed to recognize specific sounds or events within an audio clip.
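As a rough illustration of how a CNN front end and an RNN can be combined for speech recognition, here is a minimal NumPy sketch that maps a spectrogram to per-frame character logits. All dimensions, names, and the randomly initialised weights are illustrative assumptions, not any production system's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (assumptions for illustration only):
N_MELS, N_FRAMES = 40, 100   # spectrogram: mel bands x time frames
N_FILTERS, KERNEL = 16, 3    # convolutional front end
HIDDEN, N_CLASSES = 32, 29   # RNN state size; e.g. 26 letters + space + apostrophe + blank

def conv1d(spec, weights):
    """Slide each filter across time with ReLU activation (the 'CNN' part)."""
    n_out = spec.shape[1] - KERNEL + 1
    out = np.zeros((N_FILTERS, n_out))
    for t in range(n_out):
        window = spec[:, t:t + KERNEL]                 # (N_MELS, KERNEL)
        out[:, t] = np.maximum(0, np.tensordot(weights, window))
    return out

def rnn_logits(features, w_in, w_rec, w_out):
    """Simple tanh RNN over time, emitting class logits per frame (the 'RNN' part)."""
    h = np.zeros(HIDDEN)
    logits = []
    for t in range(features.shape[1]):
        h = np.tanh(w_in @ features[:, t] + w_rec @ h)
        logits.append(w_out @ h)
    return np.stack(logits)                            # (frames, N_CLASSES)

# Random weights stand in for parameters a real system would learn from data.
w_conv = rng.normal(0, 0.1, (N_FILTERS, N_MELS, KERNEL))
w_in = rng.normal(0, 0.1, (HIDDEN, N_FILTERS))
w_rec = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
w_out = rng.normal(0, 0.1, (N_CLASSES, HIDDEN))

spectrogram = rng.normal(size=(N_MELS, N_FRAMES))      # stand-in for real audio features
logits = rnn_logits(conv1d(spectrogram, w_conv), w_in, w_rec, w_out)
print(logits.shape)
```

A trained system would decode these per-frame logits into text (for example with a CTC decoder); the sketch stops at the architecture itself.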
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.96)
- Information Technology > Artificial Intelligence > Speech > Speech Recognition (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.59)
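Sound event detection can be illustrated with a deliberately simple, non-learned stand-in: flag audio frames whose energy exceeds a threshold. The frame length and threshold below are arbitrary assumptions, and a real model would also classify *which* event occurred, not just that one did:

```python
import numpy as np

def detect_events(signal, frame_len=512, threshold=0.1):
    """Return indices of frames whose RMS energy exceeds a threshold."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return np.flatnonzero(rms > threshold)

# Synthetic clip: silence with a burst of noise in the middle.
rng = np.random.default_rng(1)
clip = np.zeros(8192)
clip[3000:4000] = rng.normal(0, 0.5, 1000)

print(detect_events(clip))  # frames overlapping the noise burst
```

The detector flags only the frames that overlap the burst; everything else stays below the energy threshold.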
Technology: Past, Present and The Deep Blue Sea
Around 15,000–20,000 years ago, as the last ice age began to recede, human populations transitioned from a way of life that depended on hunting and gathering to one that favored agriculture. This time period, known as the Neolithic, was marked by a sharp increase in population, growing complexity of social and political interactions, and the emergence of technology as we know it today. Technology evolved slowly in response to the most basic social needs of food and shelter until about 5,000 years ago, when the Urban Revolution began. Before the Urban Revolution, technology had existed outside the concept of "science": it simply referred to manipulating our world (the elements, our environment, and so forth) to aid our continued survival. However, as the first astronomers in Mesopotamia, the birthplace of civilization, used data from the movements of celestial bodies to establish calendars and create irrigation systems, a creative partnership between science and technology came to the fore.
New app could detect Covid-19 from a cough with 98% accuracy
Scientists have created an algorithm that can accurately diagnose people with Covid-19 just by the sound of their cough. DeepCough, created by experts from the University of Essex, was built using 8,380 clinically validated audio samples of people coughing. The samples were collected from hospitals in Spain and Mexico since April last year: 2,339 from people who tested positive for Covid-19 and 6,041 from people who tested negative. The researchers say the algorithm was able to detect with 98 per cent accuracy whether or not the samples came from infected people. The algorithm would underpin an app that could offer a quicker, cheaper and less invasive way of preliminary testing for the virus, according to the experts. The two leading tests for detecting Covid-19 – antigen detection and PCR – involve swabs of bodily fluid, but the app could be rolled out for iOS and Android and potentially provide a way for people to self-diagnose.
- North America > Mexico (0.25)
- Europe > Spain (0.25)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.08)